
    Universal Quantum Speedup for Branch-and-Bound, Branch-and-Cut, and Tree-Search Algorithms

    Mixed Integer Programs (MIPs) model many optimization problems of interest in Computer Science, Operations Research, and Financial Engineering. Solving MIPs is NP-hard in general, but several solvers have found success in obtaining near-optimal solutions for problems of intermediate size. Branch-and-Cut algorithms, which combine Branch-and-Bound logic with cutting-plane routines, are at the core of modern MIP solvers. Montanaro proposed a quantum algorithm with a near-quadratic speedup compared to classical Branch-and-Bound algorithms in the worst case, when every optimal solution is desired. In practice, however, a near-optimal solution is satisfactory, and by leveraging tree-search heuristics to search only a portion of the solution tree, classical algorithms can perform much better than the worst-case guarantee. In this paper, we propose a quantum algorithm, Incremental-Quantum-Branch-and-Bound, with universal near-quadratic speedup over classical Branch-and-Bound algorithms for every input, i.e., if classical Branch-and-Bound has complexity $Q$ on an instance that leads to solution depth $d$, Incremental-Quantum-Branch-and-Bound offers the same guarantees with a complexity of $\tilde{O}(\sqrt{Q}d)$. Our results are valid for a wide variety of search heuristics, including depth-based, cost-based, and $A^{\ast}$ heuristics. Universal speedups are also obtained for Branch-and-Cut as well as heuristic tree search. Our algorithms are directly comparable to commercial MIP solvers, and guarantee near-quadratic speedup whenever $Q \gg d$. We use numerical simulation to verify that $Q \gg d$ for typical instances of the Sherrington-Kirkpatrick model, Maximum Independent Set, and Portfolio Optimization, as well as to extrapolate the dependence of $Q$ on input size parameters. This allows us to project the typical performance of our quantum algorithms for these important problems.
    Comment: 25 pages, 5 figures
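To make the quantities $Q$ (nodes explored) and $d$ (solution depth) concrete, here is a minimal classical best-first Branch-and-Bound sketch for a 0/1 knapsack instance; the instance and function names are illustrative, not taken from the paper.

```python
import heapq

def branch_and_bound(values, weights, capacity):
    """Best-first Branch-and-Bound for 0/1 knapsack (maximization).
    Returns (optimal value, nodes explored Q, incumbent depth d)."""
    n = len(values)
    # Sort items by value density so the fractional bound is tight.
    order = sorted(range(n), key=lambda j: -values[j] / weights[j])
    values = [values[j] for j in order]
    weights = [weights[j] for j in order]

    def bound(i, val, cap):
        # LP-relaxation upper bound: greedily add items, fractionally at the end.
        for j in range(i, n):
            if weights[j] <= cap:
                cap -= weights[j]
                val += values[j]
            else:
                return val + values[j] * cap / weights[j]
        return val

    best, best_depth, nodes = 0, 0, 0
    heap = [(-bound(0, 0, capacity), 0, 0, capacity)]  # max-heap on the bound
    while heap:
        neg_b, i, val, cap = heapq.heappop(heap)
        nodes += 1
        if -neg_b <= best or i == n:   # prune: bound cannot beat the incumbent
            continue
        if weights[i] <= cap:          # branch 1: include item i
            inc = val + values[i]
            if inc > best:
                best, best_depth = inc, i + 1
            heapq.heappush(heap, (-bound(i + 1, inc, cap - weights[i]),
                                  i + 1, inc, cap - weights[i]))
        # branch 2: exclude item i
        heapq.heappush(heap, (-bound(i + 1, val, cap), i + 1, val, cap))
    return best, nodes, best_depth
```

On nontrivial instances the node count grows much faster than the depth, which is exactly the regime $Q \gg d$ in which the $\tilde{O}(\sqrt{Q}d)$ bound is a win.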

    Analyzing Convergence in Quantum Neural Networks: Deviations from Neural Tangent Kernels

    A quantum neural network (QNN) is a parameterized mapping efficiently implementable on near-term Noisy Intermediate-Scale Quantum (NISQ) computers. It can be used for supervised learning when combined with classical gradient-based optimizers. Despite the existing empirical and theoretical investigations, the convergence of QNN training is not fully understood. Inspired by the success of the neural tangent kernels (NTKs) in probing into the dynamics of classical neural networks, a recent line of works proposes to study over-parameterized QNNs by examining a quantum version of tangent kernels. In this work, we study the dynamics of QNNs and show that, contrary to popular belief, it is qualitatively different from that of any kernel regression: due to the unitarity of quantum operations, there is a non-negligible deviation from the tangent kernel regression derived at the random initialization. As a result of the deviation, we prove the at-most sublinear convergence for QNNs with Pauli measurements, which is beyond the explanatory power of any kernel regression dynamics. We then present the actual dynamics of QNNs in the limit of over-parameterization. The new dynamics capture the change of convergence rate during training and imply that the range of measurements is crucial to the fast QNN convergence.
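The deviation described here can be seen even in a toy model: the empirical tangent kernel $K(\theta) = J(\theta)J(\theta)^{\top}$ of a parameterized circuit changes with $\theta$, unlike the frozen kernel assumed by NTK regression. A minimal single-qubit simulation (the circuit layout and names are illustrative assumptions, not the paper's model):

```python
import numpy as np

def ry(t):  # single-qubit Y rotation, exp(-i t Y / 2)
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rx(t):  # single-qubit X rotation, used here to encode the input
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

Z = np.diag([1.0, -1.0])

def qnn(x, theta):
    """Toy QNN output: <Z> after RY(theta[1]) RX(x) RY(theta[0]) |0>."""
    psi = ry(theta[1]) @ rx(x) @ ry(theta[0]) @ np.array([1.0, 0.0], dtype=complex)
    return float(np.real(np.conj(psi) @ Z @ psi))

def grad(x, theta):
    """Exact gradient via the parameter-shift rule for rotation gates."""
    g = np.zeros(len(theta))
    for k in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[k] += np.pi / 2
        tm[k] -= np.pi / 2
        g[k] = 0.5 * (qnn(x, tp) - qnn(x, tm))
    return g

def tangent_kernel(xs, theta):
    # Empirical tangent kernel: Gram matrix of output gradients.
    J = np.stack([grad(x, theta) for x in xs])
    return J @ J.T
```

Evaluating `tangent_kernel` at two different parameter settings gives visibly different Gram matrices, i.e. the kernel is not frozen along a training trajectory.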

    Sublinear classical and quantum algorithms for general matrix games

    We investigate sublinear classical and quantum algorithms for matrix games, a fundamental problem in optimization and machine learning, with provable guarantees. Given a matrix $A\in\mathbb{R}^{n\times d}$, sublinear algorithms for the matrix game $\min_{x\in\mathcal{X}}\max_{y\in\mathcal{Y}} y^{\top} Ax$ were previously known only for two special cases: (1) $\mathcal{Y}$ being the $\ell_{1}$-norm unit ball, and (2) $\mathcal{X}$ being either the $\ell_{1}$- or the $\ell_{2}$-norm unit ball. We give a sublinear classical algorithm that can interpolate smoothly between these two cases: for any fixed $q\in (1,2]$, we solve the matrix game where $\mathcal{X}$ is an $\ell_{q}$-norm unit ball within additive error $\epsilon$ in time $\tilde{O}((n+d)/\epsilon^{2})$. We also provide a corresponding sublinear quantum algorithm that solves the same task in time $\tilde{O}((\sqrt{n}+\sqrt{d})\,\mathrm{poly}(1/\epsilon))$, a quadratic improvement in both $n$ and $d$. Both our classical and quantum algorithms are optimal in the dimension parameters $n$ and $d$ up to poly-logarithmic factors. Finally, we propose sublinear classical and quantum algorithms for the approximate Carath\'eodory problem and the $\ell_{q}$-margin support vector machines as applications.
    Comment: 16 pages, 2 figures. To appear in the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2021).
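For intuition about the $\ell_1 \times \ell_1$ special case that these algorithms generalize, here is a plain (full-gradient, not sublinear) multiplicative-weights solver for $\min_x \max_y y^\top A x$ over probability simplices; the step size and iteration count are standard textbook choices, not the paper's.

```python
import numpy as np

def solve_matrix_game(A, eps):
    """Approximate min over x of max over y of y^T A x, with x and y on
    probability simplices and entries of A in [-1, 1].  Multiplicative
    weights for both players; averaged iterates have duality gap <= eps."""
    n, d = A.shape
    T = int(np.ceil(16 * np.log(max(n, d)) / eps**2))
    eta = np.sqrt(np.log(max(n, d)) / T)
    Ly = np.zeros(n)   # cumulative payoffs seen by the max (row) player
    Lx = np.zeros(d)   # cumulative losses seen by the min (column) player
    y_avg, x_avg = np.zeros(n), np.zeros(d)
    for _ in range(T):
        y = np.exp(eta * Ly - np.max(eta * Ly)); y /= y.sum()   # exponentiate payoffs
        x = np.exp(np.min(eta * Lx) - eta * Lx); x /= x.sum()   # exponentiate -losses
        Ly += A @ x        # payoff vector for the rows against current x
        Lx += A.T @ y      # loss vector for the columns against current y
        y_avg += y / T
        x_avg += x / T
    gap = float((A @ x_avg).max() - (A.T @ y_avg).min())
    return x_avg, y_avg, gap
```

The sublinear versions replace the exact products `A @ x` and `A.T @ y` with single sampled entries per iteration, which is where the $\tilde{O}((n+d)/\epsilon^{2})$ classical and $\tilde{O}((\sqrt{n}+\sqrt{d})\,\mathrm{poly}(1/\epsilon))$ quantum running times come from.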

    Parameter Setting in Quantum Approximate Optimization of Weighted Problems

    Quantum Approximate Optimization Algorithm (QAOA) is a leading candidate algorithm for solving combinatorial optimization problems on quantum computers. However, in many cases QAOA requires computationally intensive parameter optimization. The challenge of parameter optimization is particularly acute in the case of weighted problems, for which the eigenvalues of the phase operator are non-integer and the QAOA energy landscape is not periodic. In this work, we develop parameter setting heuristics for QAOA applied to a general class of weighted problems. First, we derive optimal parameters for QAOA with depth $p=1$ applied to the weighted MaxCut problem under different assumptions on the weights. In particular, we rigorously prove the conventional wisdom that in the average case the first local optimum near zero gives globally optimal QAOA parameters. Second, for $p\geq 1$ we prove that the QAOA energy landscape for weighted MaxCut approaches that for the unweighted case under a simple rescaling of parameters. Therefore, we can use parameters previously obtained for unweighted MaxCut for weighted problems. Third, we prove that for $p=1$ the QAOA objective sharply concentrates around its expectation, which means that our parameter setting rules hold with high probability for a random weighted instance. We numerically validate this approach on general weighted graphs and show that on average the QAOA energy with the proposed fixed parameters is only 1.1 percentage points away from that with optimized parameters. Finally, we propose a general heuristic rescaling scheme inspired by the analytical results for weighted MaxCut and demonstrate its effectiveness using QAOA with the XY Hamming-weight-preserving mixer applied to the portfolio optimization problem. Our heuristic improves the convergence of local optimizers, reducing the number of iterations by 7.2x on average.
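The rescaling idea can be checked directly on a small instance by statevector simulation: uniformly scaling all weights by $c$ while dividing $\gamma$ by $c$ leaves the depth-1 QAOA state unchanged, so the energy simply scales by $c$. A minimal sketch (graph, weights, and function names are illustrative; the paper's general heuristic also handles non-uniform weights):

```python
import numpy as np

def qaoa_p1_energy(n, edges, gamma, beta):
    """<C> in the depth-1 QAOA state for weighted MaxCut on n qubits,
    by direct statevector simulation (small n only).
    edges is a list of (u, v, weight) triples."""
    # Diagonal of the cost operator: cut value of each bitstring.
    cut = np.zeros(2**n)
    for z in range(2**n):
        bits = [(z >> q) & 1 for q in range(n)]
        cut[z] = sum(w for u, v, w in edges if bits[u] != bits[v])
    # Start from |+>^n and apply the phase separator exp(-i*gamma*C).
    psi = np.full(2**n, 2**(-n / 2), dtype=complex) * np.exp(-1j * gamma * cut)
    # Apply the transverse-field mixer exp(-i*beta*X) to every qubit.
    mix = np.array([[np.cos(beta), -1j * np.sin(beta)],
                    [-1j * np.sin(beta), np.cos(beta)]])
    psi = psi.reshape((2,) * n)
    for q in range(n):
        psi = np.moveaxis(np.tensordot(mix, psi, axes=([1], [q])), 0, q)
    psi = psi.reshape(-1)
    return float(np.real(np.vdot(psi, cut * psi)))
```

Doubling every weight and halving $\gamma$ reproduces exactly twice the energy, the uniform-scaling special case of the landscape-rescaling result; at $\gamma=0$ the energy is just the average cut.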

    Quantum algorithm for estimating volumes of convex bodies

    Estimating the volume of a convex body is a central problem in convex geometry and can be viewed as a continuous version of counting. We present a quantum algorithm that estimates the volume of an $n$-dimensional convex body within multiplicative error $\epsilon$ using $\tilde{O}(n^{3}+n^{2.5}/\epsilon)$ queries to a membership oracle and $\tilde{O}(n^{5}+n^{4.5}/\epsilon)$ additional arithmetic operations. For comparison, the best known classical algorithm uses $\tilde{O}(n^{4}+n^{3}/\epsilon^{2})$ queries and $\tilde{O}(n^{6}+n^{5}/\epsilon^{2})$ additional arithmetic operations. To the best of our knowledge, this is the first quantum speedup for volume estimation. Our algorithm is based on a refined framework for speeding up simulated annealing algorithms that might be of independent interest. This framework applies in the setting of "Chebyshev cooling", where the solution is expressed as a telescoping product of ratios, each having bounded variance. We develop several novel techniques when implementing our framework, including a theory of continuous-space quantum walks with rigorous bounds on discretization error. To complement our quantum algorithms, we also prove that volume estimation requires $\Omega(\sqrt{n}+1/\epsilon)$ quantum membership queries, which rules out the possibility of exponential quantum speedup in $n$ and shows optimality of our algorithm in $1/\epsilon$ up to poly-logarithmic factors.
    Comment: 61 pages, 8 figures. v2: Quantum query complexity improved to $\tilde{O}(n^{3}+n^{2.5}/\epsilon)$ and number of additional arithmetic operations improved to $\tilde{O}(n^{5}+n^{4.5}/\epsilon)$. v3: Improved Section 4.3.3 on nondestructive mean estimation and Section 6 on quantum lower bounds; various minor changes.
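The "Chebyshev cooling" structure can be illustrated classically in low dimension: write $\mathrm{Vol}(K)$ as a telescoping product of ratios over nested balls and estimate each ratio by sampling. The sketch below uses naive rejection sampling against the membership oracle, which only works for small $n$; the actual classical and quantum algorithms replace it with random walks. The radius schedule and names are illustrative assumptions.

```python
import numpy as np
from math import gamma, pi

def ball_volume(n, r):
    return pi**(n / 2) / gamma(n / 2 + 1) * r**n

def estimate_volume(in_body, n, r_in, r_out, levels=4, samples=20000, seed=0):
    """Estimate Vol(K) for a convex body K with B(r_in) <= K <= B(r_out),
    given only a membership oracle in_body, via the telescoping product
    Vol(K) = Vol(B_0) * prod_i Vol(K ∩ B_i) / Vol(K ∩ B_{i-1})."""
    rng = np.random.default_rng(seed)
    radii = [r_in * (r_out / r_in)**(i / levels) for i in range(levels + 1)]

    def sample_ball(r, size):
        # Uniform samples in the n-ball of radius r.
        g = rng.normal(size=(size, n))
        g /= np.linalg.norm(g, axis=1, keepdims=True)
        return g * r * rng.random(size)[:, None]**(1 / n)

    vol = ball_volume(n, r_in)   # B_0 <= K, so Vol(K ∩ B_0) = Vol(B_0)
    for i in range(1, levels + 1):
        pts = sample_ball(radii[i], samples)
        mask = np.array([in_body(p) for p in pts], dtype=bool)
        kept = pts[mask]         # uniform samples on K ∩ B_i
        frac = np.mean(np.linalg.norm(kept, axis=1) <= radii[i - 1])
        vol /= frac              # multiply by Vol(K ∩ B_i) / Vol(K ∩ B_{i-1})
    return vol
```

With the geometric radius schedule, each ratio is at least $(r_{i-1}/r_i)^n$ by convexity, so it is bounded away from zero; this is the bounded-variance property that "Chebyshev cooling" exploits.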
